
    Sufficient Covariate, Propensity Variable and Doubly Robust Estimation

    Statistical causal inference from observational studies often requires adjustment for a possibly multi-dimensional variable, where dimension reduction is crucial. The propensity score, first introduced by Rosenbaum and Rubin, is a popular approach to such reduction. We address causal inference within Dawid's decision-theoretic framework, where it is essential to pay attention to sufficient covariates and their properties. We examine the role of a propensity variable in a normal linear model. We investigate both population-based and sample-based linear regressions, with adjustments for a multivariate covariate and for a propensity variable. In addition, we study the augmented inverse probability weighted estimator, which combines a response model and a propensity model. In a linear regression with homoscedasticity, a propensity variable is proved to provide the same estimated causal effect as multivariate adjustment. An estimated propensity variable may, but need not, yield better precision than the true propensity variable. The augmented inverse probability weighted estimator is doubly robust and can improve precision if the propensity model is correctly specified.
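    Below is a minimal sketch of the augmented inverse probability weighted (AIPW) combination described above, with off-the-shelf logistic and linear models standing in for the paper's propensity and response models; the function name, model choices, and toy data are illustrative assumptions, not the paper's specification.

```python
# Sketch of a doubly robust AIPW estimate of an average causal effect.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_effect(X, t, y):
    """AIPW estimate of the effect of binary treatment t on outcome y."""
    # Propensity model: P(T = 1 | X)
    e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
    # Response models, fitted separately in each treatment arm
    m1 = LinearRegression().fit(X[t == 1], y[t == 1]).predict(X)
    m0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)
    # The augmentation terms correct each outcome model using the
    # inverse-probability-weighted residuals
    return np.mean(m1 - m0
                   + t * (y - m1) / e
                   - (1 - t) * (y - m0) / (1 - e))

# Toy usage with a known effect of 2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2.0 * t + X @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=2000)
print(aipw_effect(X, t, y))  # close to 2.0
```

    The estimate remains consistent if either the response models or the propensity model is correctly specified, which is the double robustness the abstract refers to.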

    Experimental and Numerical Comparison of Flexural Capacity of Light Gage Cold Formed Steel Roof Deck

    This paper presents an analysis of experimental data and compares the results to two numerical analysis methods for light-gage cold-formed steel roof deck. The flexural capacity was determined from the first failure mode of the deck. The experimental data were compared to both the effective width method and the direct strength method, with the physical tests providing the actual behavior of the deck and serving as the benchmark for how well each numerical method performs. Material samples were taken from the steel roof deck and tested for the actual yield stress, allowing the most accurate comparison between experiment and analysis, since the measured yield strength was used in the calculations. It was found that the effective width method and the direct strength method vary in their predictions of the nominal moment capacity across material grades and deck thicknesses, but tend to converge to a constant ratio, Mn,DSM / Mn,EWM, at thicker deck gages. The effective width method was found to be more accurate for thinner-gage steel roof deck, while the direct strength method was found to be more accurate for thicker-gage deck and provided a much quicker route to the flexural capacity. Both methods can be used to determine the capacity of the deck, and it is up to the end user to determine which method is appropriate for the given application.
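    As a rough illustration of the Direct Strength Method side of that ratio, here is a sketch of the standard DSM local-buckling check for flexural capacity (in its AISI S100 form); the numeric inputs are invented, not the paper's test values.

```python
# Sketch of the DSM nominal moment for local buckling in flexure.
def dsm_local_moment(My, Mcrl):
    """Nominal moment Mn,DSM given the yield moment My and the elastic
    local buckling moment Mcrl (e.g. from a finite strip analysis)."""
    lam = (My / Mcrl) ** 0.5           # local slenderness
    if lam <= 0.776:                   # stocky section: full yield moment
        return My
    r = (Mcrl / My) ** 0.4
    return (1 - 0.15 * r) * r * My     # slender section: reduced capacity

# Example: a thin deck whose local buckling moment is half its yield moment
print(dsm_local_moment(My=100.0, Mcrl=50.0))  # about 67% of My
```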

    Comparison of Experimental and Numerical Results for Flexural Capacity of Light-Gage Steel Roof Deck

    The objective of this paper is to present a comparison of experimental results with each of two numerical analyses of cold-formed steel roof deck in flexure. Prior numerical studies using the Direct Strength Method (DSM) and the Effective Width Method (EWM) have shown discrepancies between results obtained by the two methods. The goal of this research initiative was to compare results from each of the two numerical analysis methods against experimental results, in an effort to determine which numerical method is more appropriate for analyzing steel deck in flexure. Twenty-four physical tests were conducted using four different deck gages (22, 20, 18 and 16 gage) in both the deck's positive and negative bending positions. Detailed measurements of the physical geometry and the material properties of the deck samples were taken. Load was applied in a four-point bending configuration using a loading frame that engaged all flutes across the width of the deck sample, and each specimen was loaded to failure. The applied load and several displacement measurements were recorded, and the maximum load measurements and load-displacement plots were used to determine the maximum moment capacity of the deck. Finite strip modeling using CUFSM v4.03 was conducted, and analyses using the DSM and EWM are compared to the experimental results. It was found that the DSM and EWM vary in their predictions of the nominal moment capacity across material grades and deck thicknesses, but tend to converge to a constant ratio at higher deck gages. The EWM was found to be more accurate for thinner gages and the DSM more accurate for thicker gages, but both methods provide reasonable results when determining steel roof deck capacities.
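    For reference, the maximum moment in such a four-point bending test follows from statics alone; the sketch below assumes the two load points sit symmetrically, a distance a from each support, with the total load split equally between them (the dimensions are illustrative, not the paper's test setup).

```python
# Peak moment in the constant-moment region of a four-point bend:
# each load point carries P/2, so M = (P/2) * a between the load points.
def four_point_max_moment(P_total, a):
    """Maximum moment for total load P_total and load-point offset a."""
    return (P_total / 2.0) * a

# Example: 8 kN total load, load points 0.6 m from each support
print(four_point_max_moment(8.0, 0.6), "kN*m")  # 2.4 kN*m
```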

    Significance in gamma-ray astronomy - the Li & Ma problem in Bayesian statistics

    The significance of having detected an astrophysical gamma-ray source is usually calculated by means of a formula derived by Li & Ma in 1983. We solve the same problem in terms of Bayesian statistics, which provides a logically more satisfactory framework. We do not use any subjective elements in the present version of Bayesian statistics. We show that for large count numbers and a weak source the Li & Ma formula agrees with the Bayesian result. For other cases the two results differ, due both to the mathematically different treatment and to the fact that only Bayesian inference can take prior knowledge into account.
    Comment: 12 pages, 3 figures; accepted for publication in A&A
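    For context, a minimal sketch of the frequentist benchmark itself, the Li & Ma significance (their Eq. 17); the counts and exposure ratio below are invented for illustration.

```python
# Li & Ma (1983, Eq. 17) detection significance for on/off counting.
import math

def li_ma_significance(n_on, n_off, alpha):
    """Significance for n_on on-source counts, n_off off-source counts,
    and on/off exposure ratio alpha = t_on / t_off."""
    n = n_on + n_off
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / n)
    term_off = n_off * math.log((1 + alpha) * n_off / n)
    return math.sqrt(2.0 * (term_on + term_off))

# Example: 130 on-source counts, 500 off-source counts, alpha = 0.2
print(li_ma_significance(130, 500, 0.2))  # roughly 2.6 sigma
```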

    Statistical Geometry in Quantum Mechanics

    A statistical model M is a family of probability distributions, characterised by a set of continuous parameters known as the parameter space. This possesses natural geometrical properties induced by the embedding of the family of probability distributions into the Hilbert space H. By considering the square-root density function we can regard M as a submanifold of the unit sphere in H. Therefore, H embodies the 'state space' of the probability distributions, and the geometry of M can be described in terms of the embedding of M in H. The geometry in question is characterised by a natural Riemannian metric (the Fisher-Rao metric), thus allowing us to formulate the principles of classical statistical inference in a natural geometric setting. In particular, we focus attention on the variance lower bounds for statistical estimation, and establish generalisations of the classical Cramér-Rao and Bhattacharyya inequalities. The statistical model M is then specialised to the case of a submanifold of the state space of a quantum mechanical system. This is pursued by introducing a compatible complex structure on the underlying real Hilbert space, which allows the operations of ordinary quantum mechanics to be reinterpreted in the language of real Hilbert space geometry. The application of generalised variance bounds in the case of quantum statistical estimation leads to a set of higher-order corrections to the Heisenberg uncertainty relations for canonically conjugate observables.
    Comment: 32 pages, LaTeX file; extended version to include quantum measurement theory
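    In standard notation (not necessarily the paper's conventions), the objects in play can be sketched as follows: the square-root embedding of M into H, the Fisher-Rao metric it induces, and the resulting Cramér-Rao bound for an unbiased estimator from n i.i.d. observations.

```latex
% Square-root embedding, Fisher-Rao metric, and Cramer-Rao bound (sketch).
\psi_\theta(x) = \sqrt{p(x \mid \theta)} \in \mathcal{H},
\qquad
g_{ij}(\theta)
  = 4 \int \partial_i \psi_\theta(x)\, \partial_j \psi_\theta(x)\, \mathrm{d}x
  = \mathbb{E}\big[ \partial_i \log p \; \partial_j \log p \big],
\qquad
\operatorname{Cov}\big(\hat{\theta}\big) \succeq \big( n\, g(\theta) \big)^{-1}.
```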

    Computational algebraic methods in efficient estimation

    A strong link between information geometry and algebraic statistics is made by investigating statistical manifolds which are algebraic varieties. In particular, it is shown how first- and second-order efficient estimators can be constructed, such as bias-corrected maximum likelihood and more general estimators, for which the estimating equations are purely algebraic. In addition, it is shown how Gröbner basis technology, which is at the heart of algebraic statistics, can be used to reduce the degrees of the terms in the estimating equations. This points the way to the feasible use of special methods for solving polynomial equations, such as homotopy continuation methods, to find the estimators. Simple examples are given showing both equations and computations. (The proof of Theorem 2 was corrected in the latest version; some minor errors were also corrected.)
    Comment: 21 pages, 5 figures
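    As a toy illustration of the Gröbner-basis step, the snippet below reduces a small polynomial system to a triangular basis with lower-degree leading terms; the system is invented and far simpler than real estimating equations.

```python
# Reducing a toy polynomial system with a lexicographic Groebner basis.
from sympy import symbols, groebner

t1, t2 = symbols('t1 t2')
# Toy "estimating equations" in the parameters t1, t2
eqs = [t1**2 + t2**2 - 2, t1**2 - t2]
G = groebner(eqs, t1, t2, order='lex')
# The basis is triangular: the last polynomial involves only t2, so the
# system can be solved one variable at a time.
print(G.exprs)  # [t1**2 - t2, t2**2 + t2 - 2]
```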

    The Eurace@Unibi Model: An Agent-Based Macroeconomic Model for Economic Policy Analysis

    Dawid H, Gemkow S, Harting P, van der Hoog S, Neugart M. The Eurace@Unibi Model: An Agent-Based Macroeconomic Model for Economic Policy Analysis. Working Papers in Economics and Management. Vol 05-2012. Bielefeld: Bielefeld University, Department of Business Administration and Economics; 2012.
    This document provides a description of the modeling assumptions and economic features of the Eurace@Unibi model. Furthermore, the document shows typical patterns of the output generated by this model and compares them to empirically observable stylized facts. The Eurace@Unibi model provides a representation of a closed macroeconomic model with spatial structure. The main objective is to provide a micro-founded macroeconomic model that can be used as a unified framework for policy analysis in different economic policy areas and for the examination of generic macroeconomic research questions. In spite of this general agenda, the model has been constructed with certain specific research questions in mind, and therefore certain parts of the model, e.g. the mechanisms driving technological change, have been worked out in more detail than others. The purpose of this document is to give an overview of the model itself and its features rather than discussing how insights into particular economic issues can be obtained using the Eurace@Unibi model. The model has been designed as a framework for economic analysis in various domains of economics. A number of economic issues have been examined using (prior versions of) the model (see Dawid et al. (2008), Dawid et al. (2009), Dawid et al. (2011a), Dawid and Harting (2011), van der Hoog and Deissenberg (2011), Cincotti et al. (2010)), and recent extensions have substantially broadened its applicability in various economic policy domains; however, results of such policy analyses will be reported elsewhere. Whereas the overall modeling approach, the different modeling choices and the economic rationale behind these choices are discussed in some detail in this document, no detailed description of the implementation is given. Such a detailed documentation is provided in the accompanying document Dawid et al. (2011b).

    Spatial interactions in agent-based modeling

    Agent-Based Modeling (ABM) has become a widespread approach to modeling complex interactions. In this chapter, after briefly summarizing some features of ABM, the different approaches to modeling spatial interactions are discussed. It is stressed that agents can interact either indirectly, through a shared environment, and/or directly with each other. In such an approach, higher-order variables such as commodity prices, population dynamics or even institutions are not exogenously specified but instead are seen as the results of interactions. The chapter highlights that understanding the patterns emerging from such spatial interaction between agents is as much a key problem as their description through analytical or simulation means. The chapter reviews different approaches for modeling agents' behavior, taking into account either explicit spatial (lattice-based) structures or networks. Some emphasis is placed on recent ABM as applied to the description of the dynamics of the geographical distribution of economic activities out of equilibrium. The Eurace@Unibi model, an agent-based macroeconomic model with spatial structure, is used to illustrate the potential of such an approach for spatial policy analysis.
    Comment: 26 pages, 5 figures, 105 references; a chapter prepared for the book "Complexity and Geographical Economics - Topics and Tools", P. Commendatore, S.S. Kayam and I. Kubin, Eds. (Springer, in press, 2014)
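    A minimal sketch of the lattice-based direct interaction the chapter describes: agents on a grid imitate the majority choice of their four neighbors. This is purely illustrative and far simpler than the models reviewed; all names and parameters are invented.

```python
# Agents on a periodic grid imitating the local majority choice.
import numpy as np

rng = np.random.default_rng(1)
grid = rng.integers(0, 2, size=(50, 50))  # one binary choice per agent

for _ in range(20):  # synchronous update steps
    # Sum of the four von Neumann neighbours (periodic boundaries)
    nbrs = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
            + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    # Adopt the local majority; a 2-2 tie keeps the current choice
    grid = np.where(nbrs >= 3, 1, np.where(nbrs <= 1, 0, grid))

print(grid.mean())  # fraction adopting choice 1 after local imitation
```

    Even this stripped-down rule produces spatial clusters of like choices rather than a well-mixed average, which is the kind of emergent pattern the chapter argues equilibrium descriptions miss.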

    Criteria of efficiency for conformal prediction

    We study optimal conformity measures for various criteria of efficiency of classification in an idealised setting. This leads to an important class of criteria of efficiency that we call probabilistic; it turns out that the most standard criteria of efficiency used in the literature on conformal prediction are not probabilistic unless the problem of classification is binary. We consider both unconditional and label-conditional conformal prediction.
    Comment: 31 pages
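    For readers new to the setting, here is a minimal sketch of label-conditional conformal set prediction, whose efficiency (roughly, how small the prediction sets are) the criteria above measure; the conformity scores and calibration data are invented.

```python
# Label-conditional conformal prediction sets from conformity scores.
import numpy as np

def conformal_set(cal_scores, test_scores, eps=0.1):
    """Labels whose conformal p-value exceeds eps.
    cal_scores[y]: calibration conformity scores for examples with label y
    test_scores[y]: the test object's conformity score for candidate y."""
    labels = []
    for y, s in test_scores.items():
        scores = cal_scores[y]
        # p-value: rank of the test score among same-label calibration scores
        p = (np.sum(scores <= s) + 1) / (len(scores) + 1)
        if p > eps:
            labels.append(y)
    return labels  # smaller sets at the same eps = a more efficient predictor

# Toy usage with two labels and hypothetical calibration scores
cal = {0: np.linspace(0.5, 0.9, 9), 1: np.linspace(0.2, 0.6, 9)}
print(conformal_set(cal, {0: 0.85, 1: 0.05}, eps=0.1))  # -> [0]
```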

    Effects of Epistasis and Pleiotropy on Fitness Landscapes

    The factors that influence genetic architecture shape the structure of the fitness landscape, and therefore play a large role in evolutionary dynamics. Here the NK model is used to investigate how epistasis and pleiotropy -- key components of genetic architecture -- affect the structure of the fitness landscape, and how they affect the ability of evolving populations to adapt despite the difficulty of crossing valleys present in rugged landscapes. Populations are seen to make use of epistatic interactions and pleiotropy to attain higher fitness, and are not inhibited by the fact that valleys have to be crossed to reach peaks of higher fitness.
    Comment: 10 pages, 6 figures; to appear in "Origin of Life and Evolutionary Mechanisms" (P. Pontarotti, ed.), Evolutionary Biology: 16th Meeting 2012, Springer-Verlag
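    A minimal sketch of the NK landscape itself: each of N loci contributes a random value that depends on its own state and on the states of K epistatic partners, so larger K makes the landscape more rugged. The partner scheme and parameters below are illustrative, not the chapter's exact setup.

```python
# Toy NK fitness landscape over binary genomes of length N.
import numpy as np

rng = np.random.default_rng(2)
N, K = 10, 2
# Epistatic partners of each locus (here: the K loci to its right, cyclic)
partners = [[(i + j) % N for j in range(1, K + 1)] for i in range(N)]
# Contribution lookup: one random value per locus per (K+1)-bit context
table = rng.random((N, 2 ** (K + 1)))

def fitness(genome):
    total = 0.0
    for i in range(N):
        bits = [genome[i]] + [genome[p] for p in partners[i]]
        idx = int("".join(map(str, bits)), 2)  # context as a binary index
        total += table[i, idx]
    return total / N  # mean contribution; higher K => more rugged landscape

print(fitness(rng.integers(0, 2, size=N).tolist()))
```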